Knowledge graph embedding (KGE) models are an effective and popular approach to represent and reason over multi-relational data. Prior studies have shown that KGE models are sensitive to hyperparameter settings and that suitable choices are dataset-dependent. In this paper, we explore hyperparameter optimization (HPO) for very large knowledge graphs, where the cost of evaluating individual hyperparameter configurations is excessive. Prior studies often avoided this cost by using various heuristics, e.g., by training on a subgraph or by using fewer epochs. We systematically discuss and evaluate the quality and cost savings of such heuristics and other low-cost approximation techniques. Based on our findings, we introduce GraSH, an efficient multi-fidelity HPO algorithm for large-scale KGE that combines both graph and epoch reduction techniques and runs at multiple fidelity levels. We conducted an experimental study and found that GraSH obtains state-of-the-art results on large graphs at a low cost (three complete training runs in total).
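To make the multi-fidelity idea concrete, below is a minimal sketch of successive halving with combined graph and epoch reduction. The `train_and_evaluate` stub, the fidelity schedule, and all parameter names are illustrative assumptions, not GraSH's actual implementation.

```python
import random

def train_and_evaluate(config, subgraph_fraction, epochs):
    """Placeholder for a low-fidelity evaluation: train a KGE model
    with `config` on a subgraph of the given relative size for a
    reduced number of epochs and return a validation score (e.g.,
    MRR). Randomized here so the sketch runs end to end."""
    return random.random()

def successive_halving(configs, rounds=3, eta=4, full_epochs=32):
    """Keep the best 1/eta of the configurations per round while
    raising the fidelity (larger subgraph, more epochs), so the
    total cost stays in the range of a few full training runs."""
    survivors = list(configs)
    for r in range(rounds):
        fidelity = eta ** (r - rounds + 1)   # e.g., 1/16, 1/4, 1
        scored = sorted(
            ((train_and_evaluate(c, fidelity,
                                 max(1, int(full_epochs * fidelity))), c)
             for c in survivors),
            key=lambda sc: sc[0], reverse=True)
        survivors = [c for _, c in scored[:max(1, len(scored) // eta)]]
    return survivors[0]

best = successive_halving([{"lr": 10 ** -i} for i in range(1, 65)])
```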
Parameter servers (PSs) facilitate the distributed training of large machine learning tasks. In this paper, we argue that existing PSs are inefficient for tasks that exhibit non-uniform parameter access; their performance may even fall behind that of single-node baselines. We identify two major sources of such non-uniform access: skew and sampling. Existing PSs are ill-suited for managing skew because they uniformly apply the same parameter management technique to all parameters. They are inefficient for sampling because the PS is oblivious to the associated random accesses and cannot exploit locality. To overcome these performance limitations, we introduce NuPS, a novel PS architecture that (i) integrates multiple management techniques and employs a suitable technique for each parameter, and (ii) supports sampling directly via suitable sampling primitives and sampling schemes that allow for a controlled quality-efficiency trade-off. In our experimental study, NuPS outperformed existing PSs by up to one order of magnitude and provided up to linear scalability across multiple machine learning tasks.
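A toy illustration of the two core ideas, with hypothetical class and method names (this is not NuPS's code): parameters are assigned a management technique based on their access skew, and sampling goes through a dedicated primitive instead of uniform key lookups.

```python
import random

class ToyNonUniformPS:
    """Illustrative sketch: hot (frequently accessed) parameters are
    replicated on every node, cold ones are relocated to the node
    that uses them; sampling goes through a dedicated primitive."""

    def __init__(self, params, access_counts, hot_threshold=1000):
        # Per-parameter choice of management technique based on skew.
        self.replicated = {k: v for k, v in params.items()
                           if access_counts[k] >= hot_threshold}
        self.relocated = {k: v for k, v in params.items()
                          if access_counts[k] < hot_threshold}
        # Precomputed key list for sampling (e.g., negative sampling).
        self.sample_keys = list(params)

    def pull(self, key):
        # Hot parameters are served from a local replica (cheap);
        # cold ones involve ownership transfer (sketched away here).
        return self.replicated.get(key, self.relocated.get(key))

    def sample(self, n):
        # Sampling primitive: the PS controls where samples come
        # from, so it can trade sampling quality for locality.
        return random.choices(self.sample_keys, k=n)
```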
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
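A minimal sketch of the central mechanism as described above, with an assumed toy MLP standing in for the actual warp field: the reflection point cloud is displaced per view by a learned field before being handed to point splatting.

```python
import torch

class ToyWarpField(torch.nn.Module):
    """Stand-in for the neural warp field: given a view direction and
    a per-point feature, predict a 3D displacement that moves each
    reflection point along its catacaustic trajectory."""

    def __init__(self, feat_dim=8, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 + feat_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3))

    def forward(self, view_dir, point_feats):
        # view_dir: (3,), point_feats: (N, feat_dim)
        v = view_dir.expand(point_feats.shape[0], 3)
        return self.mlp(torch.cat([v, point_feats], dim=-1))

# Rendering sketch: the primary cloud is static; the reflection
# cloud is displaced per view before both are splatted.
n_points = 1024
reflection_xyz = torch.rand(n_points, 3)
feats = torch.rand(n_points, 8)
warp = ToyWarpField()
view = torch.tensor([0.0, 0.0, 1.0])
warped_xyz = reflection_xyz + warp(view, feats)  # feed to point splatting
```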
Brain-inspired computing proposes a set of algorithmic principles that hold promise for advancing artificial intelligence. They endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept that lies at the heart of brain computation is sequence learning and prediction. This form of computation is essential for almost all our daily tasks such as movement generation, perception, and language. Understanding how the brain performs such a computation is not only important to advance neuroscience but also to pave the way to new technological brain-inspired applications. A previously developed spiking neural network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner using local, biologically inspired plasticity rules. An emerging type of hardware that holds promise for efficiently running this type of algorithm is neuromorphic hardware. It emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate. Memristive devices have been identified as potential synaptic elements in neuromorphic hardware. In particular, redox-induced resistive random access memory (ReRAM) devices stand out in many respects. They permit scalability, are energy efficient and fast, and can implement biological plasticity rules. In this work, we study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model. We implement and simulate the model, including the ReRAM plasticity, using the neural simulator NEST. We investigate the effect of different device properties on the performance characteristics of the sequence learning model, and demonstrate resilience with respect to different on-off ratios, conductance resolutions, device variability, and synaptic failure.
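As a rough illustration of the device properties under study, the following sketch (our own assumptions, not the NEST implementation) applies a plasticity-driven weight change subject to a finite on/off ratio, limited conductance resolution, cycle-to-cycle variability, and random synaptic failure.

```python
import numpy as np

rng = np.random.default_rng(0)

def reram_weight_update(g, dw, g_min=0.1, g_max=1.0, levels=64,
                        variability=0.05, fail_prob=0.01):
    """Illustrative ReRAM-like synapse update: a plasticity-driven
    change `dw` is applied subject to a finite on/off ratio
    (g_min..g_max), a limited number of conductance levels,
    cycle-to-cycle variability, and random synaptic failure."""
    if rng.random() < fail_prob:          # device fails to switch
        return g
    noisy = dw * (1.0 + variability * rng.standard_normal())
    g_new = np.clip(g + noisy, g_min, g_max)
    # Quantize to the device's conductance resolution.
    step = (g_max - g_min) / (levels - 1)
    return g_min + round((g_new - g_min) / step) * step
```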
This paper describes several improvements to a new method for signal decomposition that we recently formulated under the name of Differentiable Dictionary Search (DDS). The fundamental idea of DDS is to exploit a class of powerful deep invertible density estimators called normalizing flows, to model the dictionary in a linear decomposition method such as NMF, effectively creating a bijection between the space of dictionary elements and the associated probability space, allowing a differentiable search through the dictionary space, guided by the estimated densities. As the initial formulation was a proof of concept with some practical limitations, we will present several steps towards making it scalable, hoping to improve both the computational complexity of the method and its signal decomposition capabilities. As a testbed for experimental evaluation, we choose the task of frame-level piano transcription, where the signal is to be decomposed into sources whose activity is attributed to individual piano notes. To highlight the impact of improved non-linear modelling of sources, we compare variants of our method to a linear overcomplete NMF baseline. Experimental results will show that even in the absence of additional constraints, our models produce increasingly sparse and precise decompositions, according to two pertinent evaluation measures.
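The following sketch conveys the DDS idea under simplifying assumptions: a placeholder decoder stands in for the pretrained normalizing flow, and latent codes plus activations are optimized by gradient descent to reconstruct a mixture frame. It is illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative DDS-style decomposition of one spectrogram frame x:
# each source atom is generated from a latent code z_k by a model
# that, in DDS, would be an invertible normalizing flow providing
# densities; a plain decoder is used here for brevity.
n_sources, latent_dim, n_bins = 4, 16, 128
decoder = torch.nn.Sequential(          # stand-in for the flow's inverse
    torch.nn.Linear(latent_dim, n_bins), torch.nn.Softplus())

x = torch.rand(n_bins)                                        # mixture frame
z = torch.zeros(n_sources, latent_dim, requires_grad=True)   # latent codes
a = torch.zeros(n_sources, requires_grad=True)               # activations

opt = torch.optim.Adam([z, a], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    atoms = decoder(z)                 # (n_sources, n_bins), non-negative
    recon = F.softplus(a) @ atoms      # linear, non-negative mixing
    # In DDS the loss would also include the flow's density estimates.
    loss = F.mse_loss(recon, x)
    loss.backward()
    opt.step()
```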
We introduce a novel way to incorporate prior information into (semi-) supervised non-negative matrix factorization, which we call differentiable dictionary search. It enables general, highly flexible and principled modelling of mixtures where non-linear sources are linearly mixed. We study its behavior on an audio decomposition task, and conduct an extensive, highly controlled study of its modelling capabilities.
In contrast to exploratory analysis techniques for high-dimensional datasets, such as principal component analysis (PCA), neighbor embedding (NE) techniques tend to better preserve the local structure/topology of high-dimensional data. However, the ability to preserve local structure comes at the expense of interpretability: techniques such as t-distributed stochastic neighbor embedding (t-SNE) or uniform manifold approximation and projection (UMAP) do not give insights into which high-dimensional features are responsible for the topological (cluster) structure seen in the corresponding embedding. Here, we propose different "tricks" from the field of chemometrics, based on PCA, Q-residuals, and Hotelling's T2 contributions, in combination with novel visualization approaches, to derive local and global explanations of neighbor embeddings. We show how our methods can identify discriminative features between groups of data points that go unnoticed with standard univariate or multivariate approaches.
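A minimal sketch of the chemometric quantities involved, under our own notational assumptions (this is not the paper's code): per-feature contributions to Q-residuals and Hotelling's T2, computed from a PCA model of the data behind the embedding.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(500, 30))  # toy data matrix
Xc = X - X.mean(axis=0)

pca = PCA(n_components=5).fit(Xc)
T = pca.transform(Xc)                  # scores, (n_samples, k)
P = pca.components_                    # loadings, (k, n_features)

# Q-residual contributions: squared per-feature reconstruction error,
# i.e., structure the PCA model does not capture.
E = Xc - T @ P
q_contrib = E ** 2                     # (n_samples, n_features)

# Hotelling's T2 contributions: scores normalized by their variance,
# projected back onto the original features.
t2_contrib = (T / pca.explained_variance_) @ P * Xc
```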
Changes in tumor volume and tumor characteristics over time are important biomarkers for cancer therapy. In this context, FDG-PET/CT scans are routinely used for the staging and restaging of cancer, as radiolabeled fluorodeoxyglucose is taken up in regions of high metabolism. Unfortunately, these regions of high uptake are not specific to tumors and can also represent physiological uptake by normally functioning organs, inflammation, or infection, making detailed and reliable tumor segmentation in these scans a demanding task. The AutoPET challenge addresses this research gap by providing a public dataset of FDG-PET/CT scans from 900 patients to encourage further improvements in this field. Our contribution to this challenge is an ensemble of two state-of-the-art segmentation models, nnU-Net and Swin UNETR, augmented by a maximum-intensity-projection classifier that acts as a gating mechanism. If it predicts the presence of lesions, both segmentations are combined via a late-fusion approach. Our solution achieves a Dice score of 72.12% in our cross-validation on patients diagnosed with lung cancer, melanoma, and lymphoma. Code: https://github.com/heiligerl/autopet_submission
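A schematic of the gating and late-fusion logic, with hypothetical function names standing in for the actual models (see the repository above for the real code):

```python
import numpy as np

def ensemble_predict(volume, nnunet_predict, swinunetr_predict,
                     mip_classifier):
    """Illustrative gating + late fusion. `volume` is a registered
    PET/CT volume (D, H, W); the predictors return per-voxel lesion
    probabilities in [0, 1]."""
    mip = volume.max(axis=0)             # maximum intensity projection
    if not mip_classifier(mip):          # gate: no lesion predicted
        return np.zeros(volume.shape, dtype=np.uint8)
    # Late fusion: average the two models' probability maps.
    probs = (nnunet_predict(volume) + swinunetr_predict(volume)) / 2
    return (probs > 0.5).astype(np.uint8)
```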
Background: Deep-learning-based auto-segmentation of head and neck lymph node levels (HN_LNL) is highly relevant for radiotherapy research and clinical treatment planning, but remains understudied in the academic literature. Methods: An expert-delineated cohort of 35 planning CTs was used to train an nnU-Net 3d_fullres/2d ensemble model for the automatic segmentation of 20 different HN_LNL. Validation was performed on an independent test set (n=20). In a completely blinded evaluation, three clinical experts rated the quality of the deep-learning auto-segmentations in a head-to-head comparison with expert-created contours. For a subgroup of 10 cases, intra-observer variability was compared to deep-learning auto-segmentation performance. The effect of the autocontours' consistency with the CT slice plane orientation on geometric accuracy and expert ratings was investigated. Results: Mean blinded expert ratings of deep-learning segmentations adjusted to the CT slice plane were significantly better than those of expert-created contours (81.0 vs. 79.6, p<0.001), whereas deep-learning segmentations without slice-plane adjustment were rated significantly worse than expert-created contours (77.2 vs. 79.6, p<0.001). The geometric accuracy of deep-learning segmentations was indistinguishable from intra-observer variability (mean Dice, 0.78 vs. 0.77, p=0.064), with accuracy differing between levels (p<0.001). The clinical relevance of consistency with the CT slice plane orientation was not captured by geometric accuracy metrics (Dice, 0.78 vs. 0.78, p=0.572). Conclusion: We show that an nnU-Net 3d_fullres/2d ensemble can be used for highly accurate auto-segmentation of HN_LNL using only a limited training dataset, and is thus well suited for large-scale standardized auto-segmentation of HN_LNL in research settings. Geometric accuracy metrics are only an imperfect surrogate for blinded expert ratings.
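For reference, the geometric-accuracy metric reported above is the Dice similarity coefficient; a minimal implementation for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```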
Correlative light and electron microscopy is a powerful tool to study the internal structure of cells. It combines the mutual benefits of correlating light (LM) and electron (EM) microscopy information. However, the classical approach of overlaying LM onto EM images to assign functional to structural information is hampered by the large discrepancy in the structural detail visible in the LM images. This paper aims at investigating an optimized approach, which we call EM-guided deconvolution. It attempts to automatically assign fluorescence-labeled structures to structural details visible in the EM image, to bridge the gaps in both resolution and specificity between the two imaging modalities.
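As one illustrative reading of the idea (heavily simplified, and not the published method), the sketch below runs a Richardson-Lucy-style deconvolution of the LM image while reweighting each update with a prior derived from the EM image, so fluorescence is preferentially assigned to EM-visible detail.

```python
import numpy as np
from scipy.signal import fftconvolve

def guided_richardson_lucy(lm, psf, em_prior, n_iter=30, eps=1e-12):
    """Toy sketch of EM-guided deconvolution: standard Richardson-Lucy
    updates on the LM image `lm` with PSF `psf`, modulated by a
    normalized prior map derived from the EM image."""
    lm = np.asarray(lm, dtype=float)
    est = np.full_like(lm, lm.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = lm / (blurred + eps)
        est *= fftconvolve(ratio, psf_flip, mode="same")
        est *= em_prior           # e.g., normalized EM edge/density map
        est /= est.mean() + eps   # keep overall intensity stable
    return est
```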